College and university rankings are lists of universities or equivalent institutions (e.g. liberal arts colleges) in higher education, an order determined by any combination of factors. Rankings can be based on subjectively perceived "quality," on some combination of empirical statistics, or on surveys of educators, scholars, students, prospective students or others. Some rankings evaluate institutions within a single country, while others attempt to assess major institutions in all countries worldwide. Rankings are often consulted by prospective students and their parents in the university and college admissions process.
In addition to rankings of institutions, there are also rankings of specific academic programs, departments, and schools. Rankings are conducted by magazines and newspapers and in some instances by academic practitioners. (See, for example, law school rankings in the United States.)
There has been much debate since the late 1990s about both the usefulness and the fairness of college rankings in the United States. Some higher education experts, like Kevin Carey of Education Sector, have argued that rankings such as U.S. News & World Report's are merely lists of criteria that mirror the superficial characteristics of elite colleges and universities. According to Carey, "[The] U.S. News ranking system is deeply flawed. Instead of focusing on the fundamental issues of how well colleges and universities educate their students and how well they prepare them to be successful after college, the magazine's rankings are almost entirely a function of three factors: fame, wealth, and exclusivity." He suggests that there are more important characteristics parents and students should research to select colleges, such as how well students are learning and how likely students are to earn a degree.[1]
A 2010 University of Michigan study has confirmed that the rankings in the United States have significantly affected colleges' applications and admissions.[2] In the United Kingdom, several newspapers publish league tables which rank universities.
Several regional organizations (arranged alphabetically here) provide worldwide rankings, including the following:
The Academic Ranking of World Universities, compiled by Shanghai Jiao Tong University, was a large-scale Chinese project to provide independent rankings of universities around the world, primarily to measure the gap between Chinese and "world class" universities. The results have often been cited by The Economist magazine in ranking the world's universities.[3] As with all rankings, there are issues of methodology, and one of the primary criticisms is the ranking's bias towards the natural sciences and English-language science journals over other subjects.[4] This is evidenced by the use of criteria such as the volume of articles published in Science or Nature (both journals devoted to the natural sciences and published in English), the number of Nobel Prize winners (predominantly awarded in the physical sciences) and the number of Fields Medalists (mathematics).[4] Furthermore, the ranking does not take into account whether those winners are still associated with the institutions, nor does it consider where the award-winning work was performed. As a result, critics argue, it becomes a superficial award-counting exercise that favours older, more established institutions even when no active winners remain on their faculty rosters, and wealthy American institutions that attract prize-winners with large financial rewards even when no award-winning work was done there. Counting the number of articles in Nature and Science as a major ranking criterion also appears superficial, because much award-winning work was not published in these journals. In addition to these criticisms, a 2007 paper in the peer-reviewed journal Scientometrics found that the results of the Shanghai university rankings are irreproducible.[5]
A ranking of university and college web presence, the G-Factor methodology counts the number of links to each university's website from other university websites, relying solely on Google's search engine. The G-Factor is an indicator of the popularity or importance of each university's website from the combined perspectives of the creators of many other university websites. It therefore claims to be a kind of extensive and objective peer review of a university through its website – in social network theory terminology, the G-Factor measures the centrality of each university's website in the network of university websites.[6]
Global University Ranking is a ranking of over 400 "world-known" universities by RatER, a Russian-based non-commercial independent rating agency supported by the academic society of Russia.[7][8] The methodology draws its pool of universities from what it has determined are the four main global rankings (Academic Ranking of World Universities, HEEACT, Times-QS, and Webometrics) and uses a pool of "experts", formed by project officials and managers, to determine the rating scales for every indicator of university performance in seven areas: academic performance, research performance, faculty expertise, resource availability, socially significant activities of graduates, international activities, and international opinion of foreign universities. Each expert performs his or her own evaluation of the performance indicators of all the universities. The final evaluation of each indicator is determined as the average of all the expert evaluations.[9]
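The averaging step described above can be sketched as follows. The indicator names and scores here are illustrative placeholders, not RatER's actual data or scale:

```python
# Sketch of RatER-style scoring: each expert rates every indicator, and an
# indicator's final value is the mean of all the expert ratings.
# Indicator names and scores below are illustrative assumptions.

def average_expert_scores(ratings):
    """ratings: dict mapping indicator name -> list of expert scores."""
    return {indicator: sum(scores) / len(scores)
            for indicator, scores in ratings.items()}

ratings = {
    "academic_performance": [4.0, 3.5, 4.5],   # three experts' scores
    "research_performance": [3.0, 3.0, 4.0],
}
final = average_expert_scores(ratings)
print(final["academic_performance"])  # mean of 4.0, 3.5, 4.5 -> 4.0
```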
The Performance Ranking of Scientific Papers for World Universities is a bibliometric-based ranking produced by the Higher Education Evaluation and Accreditation Council of Taiwan.[10] The performance measures are composed of eight indicators (11-year articles, current articles, 11-year citations, current citations, average citations, h-index, highly cited papers, and high-impact journal articles) representing three criteria of scientific paper performance: research productivity, research impact, and research excellence. The project employs bibliometric methods to analyze and rank the scientific paper performance of the world's top 500 universities overall and of the top 300 universities in each of six fields.
The HEEACT performance ranking system is designed for research universities. The objective indicators used in this ranking system are designed to measure both long-term and short-term research performance of each university. The 2007 ranking methodology was found to favor universities with medical schools, and in response HEEACT added field-based rankings. The six field-based rankings follow the subject categorization of WOS: Agriculture & Environment Sciences (AGE), Clinical Medicine (MED), Engineering, Computing & Technology (ENG), Life Sciences (LIFE), Natural Sciences (SCI) and Social Sciences (SOC).
A human competitiveness index and analysis by the Human Resources & Labor Review (HRLR) is published annually on the Chasecareer Network (ChaseCareer.Net). This ranking system, created by a team of multinational experts, is based on the Human Resources & Labor Review Indices (HRI and LRI), which measure the performance of graduates of the world's top 300 universities. The ranking system is under intensive development.[11]
In August 2006, the American magazine Newsweek published a ranking of the Top 100 Global Universities, utilizing selected criteria from two pre-existing rankings (the Academic Ranking of World Universities by Shanghai Jiao Tong University and The Times Higher Education-QS rankings), with the additional criterion of library holdings (number of volumes). It aimed at taking into account openness and diversity, as well as distinction in research.[12]
SCImago Research Group's SCImago Institutions Rankings: 2009 World Report ranks all institutions that had more than 100 outputs indexed in the publishing giant Elsevier's Scopus database in 2007. The ranking comprises 1,527 higher education institutions, 335 health organisations, 216 government organisations, 29 private bodies and 17 other organisations. SCImago derives five measures from the Scopus database: total outputs, cites per document (which are heavily influenced by field of research as well as research quality), international collaboration, normalised SCImago journal rank, and normalised citations per output, but total output is the only criterion used for ranking.
From 2004 to 2009 Times Higher Education, a British publication on higher education, published the annual THE – QS World University Rankings in association with QS Quacquarelli Symonds. Times Higher Education published a table of the top 200 universities in the world and QS approximately 500 online, in book form, and via media partners such as US News and World Report.[13]
Many more non-US universities (especially British) populate the upper tier of the QS ranking than appear prominently in other ranking systems.[14] A distinctive feature of this system is its use of peer review derived from over 9,000 scholars and academics in various fields, and from over 3,000 recruiters of graduates from around the world.[15] Other criteria which this system uses include international staff and student numbers, citation data from Scopus,[16] and faculty/student ratio. The direct connection between academic opinion and success in the rankings is regarded by QS as one of their biggest strengths, and is seen as positive by universities which do well in them.
These rankings are published in the United States by US News & World Report as the "World's Best Universities."[17]
On 30 October 2009, Times Higher Education broke with QS and signed an agreement with Thomson Reuters to provide the data for a new set of world university rankings, which will be called Times Higher Education World University Rankings. The methodology for these rankings is under development. THE has stated that academic opinion will form part of its new offering.
QS, which has collected and analysed the rankings data for the past six years and retains all the intellectual property in them, will continue to publish the rankings (now known as the QS World University Rankings) online and in book form, and via media partners including US News & World Report, Chosun Ilbo and Nouvel Observateur. QS will continue to use data from Scopus, part of the Elsevier publishing group, its own data gathering, and its own peer and recruiter review, to produce these rankings on a consistent basis. QS is adding to them with new outputs such as the Asian University Rankings,[18] published first in 2009 and for the second time in 2010 (see below). QS managing director Nunzio Quacquarelli has stated that he welcomes more competition in this arena, as no ranking system can ever be completely right for all purposes.
In two research papers[19][20] published by Academic Leadership (2009), and subsequently in an article[21] published by Times Higher Education, Paul Z. Jambor of Korea University argued that, because the rankings use reputation data, universities that develop an unfavorable image tend to fall, or at least cease to rise, in the rankings. An unfavorable image developed by a nation's universities can harm their collective rankings. For this reason, universities worldwide have an incentive to adhere to internationally accepted standards so that they do not run the risk of sliding in the ranks on the international front.
The QS Rankings have been criticised[22] for placing too much emphasis on peer review. Some people have expressed concern about the manner in which the peer review has been carried out.[23] In a report,[24] Peter Wills from the University of Auckland, New Zealand wrote of the Times Higher Education-QS World University Rankings:
“But we note also that this survey establishes its rankings by appealing to university staff, even offering financial enticements to participate (see Appendix II). Staff are likely to feel it is in their greatest interest to rank their own institution more highly than others. This means the results of the survey and any apparent change in ranking are highly questionable, and that a high ranking has no real intrinsic value in any case. We are vehemently opposed to the evaluation of the University according to the outcome of such PR competitions.”
At the 16th Annual New Zealand International Education Conference held at Christchurch, New Zealand in August 2007, Simon Marginson presented a paper[25] that outlines what he regards as the fundamental flaws underlying the Times Higher Education-QS World University Rankings. A similar article[26] (also published by the same author) appeared in The Australian newspaper in December 2006. Some of the points mentioned include:
“Half of the THES index is comprised by existing reputation: 40 per cent by a reputational survey of academics (‘peer review’), and another 10 per cent determined by a survey of 'global employers'. The THES index is too easily open to manipulation as it is not specified who is surveyed or what questions are asked. By changing the recipients of the surveys, or the way the survey results are factored in, the results can be shifted markedly.
- The pool of responses is heavily weighted in favour of academic 'peers' from nations where The Times (sic) is well-known, such as the UK, Australia, New Zealand, Malaysia and so on.
- It’s good when people say nice things about you, but it is better when those things are true. It is hard to resist the temptation to use the THES rankings in institutional marketing, but it would be a serious strategic error to assume that they are soundly based.
- Results have been highly volatile. There have been many sharp rises and falls, especially in the second half of the THES top 200 where small differences in metrics can generate large rankings effects. Fudan in China has oscillated between 72 and 195, RMIT in Australia between 55 and 146. In the US, Emory has risen from 173 to 56 and Purdue fell from 59 to 127.”
In contrast to academic rankings, the Professional Ranking of World Universities established in 2007 by the École nationale supérieure des mines de Paris intends to measure the efficiency of each university in producing leading business professionals. Its main compilation criterion is the number of Chief Executive Officers (or equivalent top executives) among the Fortune Global 500.[27]
The Webometrics Ranking of World Universities is produced by the Cybermetrics Lab (CCHS), a unit of the National Research Council (CSIC), the main public research body in Spain. It offers information about more than 12,000 universities according to their web-presence (a computerised assessment of the scholarly contents and visibility and impact of the whole university webdomain).
The Webometrics Ranking is built from a database of over 20,000 higher education institutions. The Top 12,000 universities are shown in the main rank, but even more are covered in the regional lists. Institutions from developing countries benefit from this policy as they obtain knowledge of their current position even if they are not World-Class Universities.
The ranking started in 2004 and is based on a composite indicator that takes into account both the volume of web contents and the visibility and impact of these web publications, measured by the number of external inlinks they receive. The ranking is updated every January and July, providing web indicators for universities worldwide. This approach takes into account the wide range of scientific activities represented on academic websites, which are frequently overlooked by bibliometric indicators.
Webometric indicators are provided to show the commitment of institutions to web publication. Universities of high academic quality may therefore be ranked lower than expected due to a restrained web publication policy. The results show a high correlation with other rankings, but also a larger than expected presence of US and Canadian universities in the top 200, and lower positions for small and medium-sized biomedical institutions and for many German, French, Italian and Japanese universities, probably because these countries have large independent research councils (CNRS, Max Planck, CNR).
Another ranking is by the Research Center for Chinese Science Evaluation at Wuhan University. The ranking is based on Essential Science Indicators (ESI), which provides data of journal article publication counts and citation frequencies in over 11,000 journals around the world in 22 research fields.[28]
Regional and national rankings are carried out in Africa, Asia, Europe, North America, South America and Oceania.
Cairo University was ranked in the 401–500 band of the QS World University Rankings for 2009.[29]
Academic league tables of South African universities are largely based on international university rankings, as no specifically South African rankings have yet been published.
QS Quacquarelli Symonds published the Asian University Rankings for the first time in 2009, with the second edition in 2010.[30] The rankings use some of the same data as the QS World University Rankings alongside other material, for example on the number of exchange students arriving at and coming from each university. The Asian University Rankings list the top 200 universities in Asia, with the University of Hong Kong in the number one spot in 2010.
The Chinese Academy of Management Science produces the Chinese university rankings.
There are also rankings based on universities' billionaire alumni.
The Korean Council for University Education, established in 2009, is an organization that evaluates universities in South Korea. Its data are currently being processed.
The Higher Education Commission in Pakistan releases an annual ranking of Pakistani universities.[31]
Magazines such as India Today, Outlook, Mint, Dataquest and EFY conduct annual surveys with rankings in the major disciplines. For more information, see Engineering college rankings in India.
Academic rankings in the Philippines are conducted by the Professional Regulation Commission and the Commission on Higher Education, based on the average passing rates of all Philippine colleges and universities in board examinations across all courses.[32][33]
The European Commission also weighed in on the issue, when it compiled a list of the 22 universities in the EU with the highest scientific impact,[34] measuring universities in terms of the impact of their scientific output. This ranking was compiled as part of the Third European Report on Science & Technology Indicators,[35] prepared by the Directorate General for Science and Research of the European Commission in 2003 (updated 2004).
Being an official document of the European Union (from the office of the EU commissioner for science and technology) that took several years of specialist effort to compile, it can be regarded as a highly reliable source (the full report, containing almost 500 pages of statistics, is available for free download from the EU website). Unlike the other rankings, it explicitly considers only the top institutions in the EU, though ample comparison statistics with the rest of the world are provided in the full report. The report says that "University College London comes out on top in both publications (the number of scientific publications produced by the university) and citations (the number of times those scientific publications are cited by other researchers)"; however, the table lists the top-scoring university as "Univ London", indicating that the authors counted the scientific output of the University of London as a whole rather than of its individual constituent colleges.
In this ranking, the top two universities in the EU are Cambridge and Oxford, as in the Jiao Tong and Times rankings. This ranking, however, stresses the scientific quality of the institution rather than its size or perceived prestige. Thus smaller, technical universities, such as Eindhoven (Netherlands) and Munich (Germany), are ranked third and fourth, behind Cambridge and followed by the University of Edinburgh in the UK. The report does not provide a direct comparison between EU universities and those in the rest of the world, although it does compute a composite scientific impact score measured against a world average.
In December 2008, the European Commission published a call for tenders, inviting bidders to design and test a new multi-dimensional university ranking system with global outreach. The first results of the envisaged pilot project were expected in the first half of 2011.[36]
CHE ExcellenceRanking
Since 2007, the CHE ExcellenceRanking has been published by the Center for Higher Education Development in Germany. The ranking includes the disciplines of biology, chemistry, mathematics and physics as well as psychology, political science and economics. The ranking is designed to support the search for master's or doctoral programmes at higher education institutions (HEIs). Alongside this, the CHE wants to highlight the research strengths of European HEIs and provide those HEIs listed in the ranking with ideas for the further improvement of their already excellent programmes.
Le Nouvel Observateur[37] and other popular magazines occasionally offer rankings of universities, "Grandes écoles" and their preparatory schools, the "Prépas".
CHE UniversityRanking
The English version of the German CHE University Ranking is provided by the DAAD.
CHE ResearchRanking
Every year, the CHE also publishes a ResearchRanking showing the research strengths of German universities. The CHE ResearchRanking is based on the research-related data of the CHE UniversityRanking.
The Sunday Times compiles a league table of Irish universities based on a mix of criteria.
Every year La Repubblica, in collaboration with CENSIS, compiles a league table of Italian universities.
A ranking of Romanian universities was published in 2006 and 2007 by the Ad Astra association of Romanian scientists.[38]
The swissUp Ranking provided a ranking for Swiss university and polytechnic students until 2004. The swissUp Ranking is no longer conducted. Currently Switzerland has no ranking or evaluation system of its universities.
The Research Assessment Exercises (RAE) are attempts by the UK government to evaluate the quality of research undertaken by British universities. Each subject, called a unit of assessment, is given a ranking by a peer review panel. The rankings are used in the allocation of the funding each university receives from the government. The most recent assessments were made in 2001 and 2008. The RAE provides quality ratings for research across all disciplines. Panels use a standard scale to award a rating for each submission. Ratings range from 1 to 5*, according to how much of the work is judged to reach national or international levels of excellence. Higher education institutions (HEIs) which take part receive grants from one of the four higher education funding bodies in England, Scotland, Wales and Northern Ireland.
There are several annual University and College Rankings:
Standards of undergraduate teaching are assessed by the Quality Assurance Agency for Higher Education (QAA), an independent body established by the UK's universities and other higher education institutions in 1997. The QAA was under contract to the Higher Education Funding Council for England to assess quality for universities in England in a system of subject review. This replaced a previous system of Teaching Quality Assessments (TQAs), which aimed to assess the administrative, policy and procedural framework within which teaching took place rather than directly assessing teaching quality. As this system of universal inspection was hugely burdensome, it was replaced by a system of information provision, one part of which is a national student survey. The survey has been run three times and publishes scores which have been used by the league table industry. The rankings have had to create artificial differences, however, as students are generally very satisfied.
The Ministry of Education and Science of Ukraine performs official yearly university evaluations.[39] The newspaper Zerkalo Nedeli ranked the top 200 Ukrainian universities in 2007.[40] The Kyiv Student Council ranks universities on criteria of student satisfaction.[41]
In Argentina the evaluation, accreditation, and ranking of the higher education programs is made by the National Commission for University Evaluation and Accreditation.[42]
See Brazil University Rankings
Maclean's, a Canadian news magazine, publishes an annual ranking of Canadian Universities, called the Maclean’s University Rankings.[43] The criteria used by the magazine include characteristics of the student body, classes, faculty, finances, the library, and reputation. The rankings are split into three categories: primarily undergraduate (schools that focus on undergraduate studies with few to no graduate programs), comprehensive (schools that have both extensive undergraduate studies and an extensive selection of graduate programs), and medical doctoral (schools that have a professional medical program and a selection of graduate programs).[44]
These rankings have received scrutiny and criticism from universities. For example, the University of Calgary produced a formal study examining the methodology of the ranking, illuminating the factors that determined the university's rank, and criticizing certain aspects of the methodology. In addition, the University of Alberta and the University of Toronto have both expressed displeasure over Maclean's ranking system. A notable difference between rankings in the United States and Maclean's rankings, however, is that Maclean's does not include privately funded universities in its rankings. However, the vast majority of Canadian universities, including the best-known, are publicly funded.
Beginning in September 2006, a number (over 20) of Canadian universities, including several of the most prestigious and largest universities such as the University of Toronto, University of British Columbia, University of Alberta and McMaster University, jointly refused to participate in Maclean's survey.[45] The president of the University of Alberta, Indira Samarasekera, wrote of this protest that Maclean's initially filed a "Freedom of Information" request but that "it was too late" for the universities to respond. Samarasekera further stated, "Most of [the universities] had already posted the data online, and we directed Maclean’s staff to our Web sites. In instances where the magazine staff couldn’t find data on our Web site, they chose to use the previous year’s data."[46]
The best-known American college and university rankings[47] have been compiled since 1983 by the magazine U.S. News & World Report and are based upon data which U.S. News collects from each educational institution, either from an annual survey sent to each school or from the school's website, as well as upon opinion surveys of university faculty and administrators outside the school.[48] The college rankings were not published in 1984, but have been published in every year since. The precise methodology used by the U.S. News rankings has changed many times, and the data are not all available to the public, so peer review of the rankings is limited. As a result, many other rankings arose that seriously challenged the results and methodology of the U.S. News ranking, as described in the section on other rankings of US universities below. The U.S. News rankings, unlike some other such lists, create a strict hierarchy of individual colleges and universities in their "top tier" rather than ranking only groups or "tiers" of schools, and the individual schools' order changes significantly every year the rankings are published. The U.S. News tiers range from Tier 1, the highest, to Tier 4, the lowest. The most important factors in the rankings are:
All these factors are combined according to statistical weights determined by U.S. News. The weighting is often changed by U.S. News from year to year, and is not empirically determined (the National Opinion Research Center methodology review said that these weights "lack any defensible empirical or theoretical basis"). Critics have charged that U.S. News intentionally changes its methodology every year so that the rankings change and they can sell more magazines. The first four of the listed factors account for the great majority of the U.S. News ranking (80%, according to U.S. News's 2005 methodology), and the "reputational measure" (which surveys high-level administrators at similar institutions about their perceived quality ranking of each college and university) is especially important to the final ranking (accounting by itself for 25% of the ranking according to the 2005 methodology).[49]
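The weighted combination described above can be sketched as follows. The factor names, weights, and scores are illustrative placeholders, not U.S. News's actual methodology:

```python
# Illustrative weighted-sum ranking: combine normalized factor scores (0-100)
# using fixed weights that sum to 1. The factors, weights, and scores below
# are placeholder assumptions, not the actual U.S. News methodology.

WEIGHTS = {
    "reputation": 0.25,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "selectivity": 0.15,
    "financial_resources": 0.10,
    "alumni_giving": 0.10,
}

def composite_score(factor_scores):
    """factor_scores: dict of factor -> normalized score (0-100)."""
    return sum(WEIGHTS[f] * s for f, s in factor_scores.items())

schools = {
    "College A": {"reputation": 90, "retention": 85, "faculty_resources": 80,
                  "selectivity": 88, "financial_resources": 75, "alumni_giving": 60},
    "College B": {"reputation": 70, "retention": 92, "faculty_resources": 85,
                  "selectivity": 80, "financial_resources": 90, "alumni_giving": 80},
}

# Sort schools by composite score, highest first.
ranked = sorted(schools, key=lambda s: composite_score(schools[s]), reverse=True)
print(ranked)
```

Because the weights are set by hand, small changes to them can reorder the list, which is the crux of the criticism that the weighting "lacks any defensible empirical or theoretical basis".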
A New York Times article reported that, given the U.S. News weighting methodology, "it's easy to guess who's going to end up on top: the Big Three, Harvard, Yale and Princeton round out the first three essentially every year. In fact, when asked how he knew his system was sound, Mel Elfin, the rankings' founder, often answered that he knew it because those three schools always landed on top. When a new lead statistician, Amy Graham, changed the formula in 1999 to one she considered more statistically valid, the California Institute of Technology jumped to first place. Ms. Graham soon left, and a slightly modified system pushed Princeton back to No. 1 the next year."[50] A San Francisco Chronicle article argues that "almost all of US News factors are redundant and can be boiled down to one characteristic: the size of the college or university's endowment."[51]
Research at the University of Michigan by Michael Bastedo and Nicholas Bowman, which analyzes the effects of the U.S. News & World Report rankings, has shown lasting effects of the rankings on college applications and admissions by students in the top 10% of their class.[2] In addition, they have found that rankings influence survey assessments of reputation by college presidents at peer institutions, such that rankings and reputation are becoming much more similar over time.[52]
The National Research Council ranks the doctoral research programs of universities across the US, but it last produced a report in 1995. There is no announced date for the next report, although data collection for it began in 2006.[53]
The Faculty Scholarly Productivity Index by Academic Analytics ranks universities based on faculty publications, citations, research grants and awards.[54] A total of 354 institutions are studied.
The Center for Measuring University Performance has published a research ranking of American universities, the Top American Research Universities, since 2000. The ranking is based on data such as research publications, citations, recognitions, and funding, as well as measures of undergraduate quality such as SAT scores. Because the underlying data come from publicly accessible sources, the possibility of manipulation is reduced. The research method is consistent from year to year, and any changes are explained in the publication itself. References from other studies are cited.[55]
The Washington Monthly's "College Rankings", last published in 2009, began as a research report in 2005 and introduced its first official rankings in the September 2006 issue. It ranks American universities and colleges[56] on their contribution to the public good, as measured by social mobility, research, and service.
In 2008, Forbes.com published a list of "America's Best Colleges",[58] which it updated in 2009.[59] The Forbes rankings are calculated from the listing of alumni in Who's Who in America, student evaluations of professors from ratemyprofessors.com, self-reported alumni salaries from payscale.com, four-year graduation rates, the number of students and faculty receiving "nationally competitive awards," and four-year accumulated student debt.[60][61] The 2009 rankings were notable for including less commonly recognized colleges and for ranking the US military academies highly. They were also strongly criticized, largely for their heavy reliance on highly subjective sources (50% of the ranking depends on Who's Who in America and ratemyprofessors.com)[62] and for the significantly lower rankings given to many nationally recognized colleges and research institutions, including members of the Ivy League. The validity of comparing the federal service academies, which are funded entirely by taxpayers, with institutions that must raise their own funds is also open to question.
Forbes has also published "Top Colleges For Getting Rich." These rankings are considered questionable because they were partly based on anonymous readers' votes and on figures from payscale.com, which collects data through the self-reported earnings of graduates.[63]
Avery et al. pioneered the use of choice modeling to rank colleges. Rather than ranking programs by traditional criteria, their National Bureau of Economic Research working paper, "A Revealed Preference Ranking of U.S. Colleges and Universities," modeled the college ranking landscape through a statistical analysis of the decisions of 3,240 students who applied to college in 1999.[64] A similar approach has been used since 2009 for annual rankings produced by MyChances.net,[65] which states that its method is based on Avery's.[66] The approach works as follows: if a student was admitted to multiple colleges, the college the student attended was modeled as the winner and the colleges not attended were modeled as the losers. An Elo point system assigned points for each win or loss, and the colleges were ultimately ranked by their Elo points. A useful consequence of using Elo points is that they can estimate how often a student, admitted to two schools, will choose one over the other. The authors also felt that their ranking was less subject to manipulation than conventional rankings.
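The win/loss mechanics described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual model: the update constant `k`, the initial rating, and the function names `expected_win` and `rank_colleges` are all assumptions made for the example.

```python
def expected_win(r_winner, r_loser):
    """Estimated probability that the first college is chosen over the
    second, given their current ratings (standard logistic Elo form)."""
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))

def rank_colleges(matriculation_choices, k=32, initial=1500.0):
    """matriculation_choices: list of (attended, other_admits) pairs.
    Each college that admitted the student but was not attended is
    treated as having 'lost' a match to the college attended."""
    ratings = {}
    for attended, others in matriculation_choices:
        for other in others:
            rw = ratings.setdefault(attended, initial)
            rl = ratings.setdefault(other, initial)
            gain = k * (1 - expected_win(rw, rl))  # reward the upset more
            ratings[attended] = rw + gain
            ratings[other] = rl - gain
    # Highest rating first
    return sorted(ratings.items(), key=lambda kv: -kv[1])

# Hypothetical data: students chose A over B twice, A over C once, B over C once.
choices = [("A", ["B"]), ("A", ["B", "C"]), ("B", ["C"])]
ranking = rank_colleges(choices)
```

After these matches the ranking comes out A, B, C, and `expected_win` applied to any two final ratings gives the estimated head-to-head choice frequency that the text mentions as a byproduct of the Elo formulation.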
Other organizations which compile general US annual college and university rankings include the Fiske Guide to Colleges, Princeton Review, and College Prowler. Many specialized rankings are available in guidebooks for undergraduate and graduate students, dealing with individual student interests, fields of study, and other concerns such as geographical location, financial aid, and affordability.
One commercial ranking service is Top Tier Educational Services.[67] Its rankings use student-centered criteria and, although the full study is completely redone every two years, are updated every quarter with new input data. The criteria include subjective data, such as peer assessment and desirability, and objective data, such as the ACT and SAT scores and high school GPAs of admitted students.
Such newer ranking schemes measure what decision makers think rather than why they make the decisions they do, and they may or may not augment these reputational statistics with hard quantitative information. The authors discuss their ranking system and methodology with students but do not share their specific research tools or formulas. As with any ranking built on subjective opinion, the results are prone to personal bias, prejudice, and bounded rationality. Public universities are also penalized: because they have a social mission in addition to an academic one, they cannot charge as much money, or be as selective, as private universities. Finally, the fact that the ranking service is a commercial company raises the question of whether hidden business motives lie behind its rankings.
Among the rankings dealing with individual fields of study is the Philosophical Gourmet Report, or "Leiter Report" (after its founding author, Brian Leiter, then of the University of Texas at Austin and now of the University of Chicago), a ranking of philosophy departments. The report has been at least as controversial within its field as the general U.S. News rankings, attracting criticism from many different viewpoints. Notably, practitioners of continental philosophy, who perceive the Leiter Report as unfair to their field, have compiled alternative rankings.
The Gourman Report, which was last published in 1996, ranked the quality of undergraduate majors and graduate programs.
Gallup has also polled American adults on the question, "All in all, what would you say is the best college or university in the United States?"[68]
Global Language Monitor produces "TrendTopper MediaBuzz" rankings of the top 225 US colleges and universities twice a year, according to their appearances on the internet, in blogs and social media, and in global electronic and print media.[69] It publishes overall results for separate University and College categories, using the Carnegie Foundation for the Advancement of Teaching's classifications to distinguish between universities and liberal arts colleges. The rankings include the 125 top universities and the 100 top colleges, the change in rankings over time, a "Predictive Quantities Indicator" (PQI) index number for relative rankings, rankings by momentum (yearly and 90-day snapshots), and rankings by state. The schools were most recently ranked on November 1, 2009, with the last day of 2008 as the base and two interim snapshots during 2009. The PQI index is produced by Global Language Monitor's proprietary PQI algorithm,[70] which some linguists have criticized for its use in a highly publicized count of the total number of English words.[71][72][73][74] Global Language Monitor also sells a TrendTopper MediaBuzz reputation-management product for higher education through which "colleges and universities can enhance their standings among peers",[75] though the company states that it "does not influence the Higher Education rankings in any way".[76]
Mexican colleges, universities, and other research institutions have been compared in the Estudio Comparativo de Universidades Mexicanas (ECUM), produced within the Universidad Nacional Autónoma de México (UNAM).[77] ECUM provides data on institutional participation in articles in ISI Web of Knowledge-indexed journals; faculty participation in each of the three levels of Mexico's National Researchers System (SNI); graduate degrees within the register of quality graduate programs (PNPC) of CONACYT (National Council of Science and Technology); and the number of academic research bodies (cuerpos académicos) recognized under the Secretariat of Public Education (SEP) program PROMEP.
ECUM provides online access to data for 2007 and 2008 through the Explorador de datos del ECUM (ExECUM), which offers three options for visualizing institutional data.
ExECUM is designed to let users establish the comparison types and levels they consider relevant. For this purpose, data are presented in raw form, and virtually no indicators or weightings are built into the system: users can establish relationships between variables and build their own indicators according to their own needs and analytical perspectives.
Based on this comparative study project, the Dirección General de Evaluación Institucional at UNAM, creator of ECUM, has published two reports, Desempeño de Universidades Mexicanas en la Función de Investigación: Estudio Comparativo and Estudio comparativo de universidades mexicanas. Segundo reporte: desempeño en investigación y docencia, analyzing the data for 2007 and 2008.
American college and university ranking systems have drawn criticism from within and outside higher education in Canada and the United States. Some institutions critical of the ranking systems include Reed College, Alma College, Mount Holyoke College, St. John's College, Earlham College, MIT, and Stanford University.
On 19 June 2007, during the annual meeting of the Annapolis Group, members discussed a letter to college presidents asking them not to participate in the "reputation survey" section of the U.S. News and World Report rankings (a section that accounts for 25% of a school's ranking). As a result, "a majority of the approximately 80 presidents at the meeting said that they did not intend to participate in the U.S. News reputational rankings in the future."[78] However, the decision to fill out the reputational survey is left to each individual college, since "the Annapolis Group is not a legislative body and any decision about participating in the US News rankings rests with the individual institutions."[79] The statement also said that its members "have agreed to participate in the development of an alternative common format that presents information about their colleges for students and their families to use in the college search process."[79] This database will be web-based and developed in conjunction with higher education organizations including the National Association of Independent Colleges and Universities and the Council of Independent Colleges.
U.S. News and World Report editor Robert Morse issued a response on 22 June 2007, in which he argued:
"in terms of the peer assessment survey, we at U.S. News firmly believe the survey has significant value because it allows us to measure the "intangibles" of a college that we can't measure through statistical data. Plus, the reputation of a school can help get that all-important first job and plays a key part in which grad school someone will be able to get into. The peer survey is by nature subjective, but the technique of asking industry leaders to rate their competitors is a commonly accepted practice. The results from the peer survey also can act to level the playing field between private and public colleges."[80]
In reference to the alternative database discussed by the Annapolis Group, Morse also argued:
"It's important to point out that the Annapolis Group's stated goal of presenting college data in a common format has been tried before [...] U.S. News has been supplying this exact college information for many years already. And it appears that NAICU will be doing it with significantly less comparability and functionality. U.S. News first collects all these data (using an agreed-upon set of definitions from the Common Data Set). Then we post the data on our website in easily accessible, comparable tables. In other words, the Annapolis Group and the others in the NAICU initiative actually are following the lead of U.S. News."[80]
In 1996, Gerhard Casper, then-president of Stanford University, charged that US News & World Report simply changed the formulas it used to calculate financial resources:
Knowing that universities – and, in most cases, the statistics they submit – change little from one year to the next, I can only conclude that what are changing are the formulas the magazine's number massagers employ. And, indeed, there is marked evidence of that this year. In the category "Faculty resources," even though few of us had significant changes in our faculty or student numbers, our class sizes, or our finances, the rankings' producers created a mad scramble in rank order [...data...]. Then there is "Financial resources," where Stanford dropped from #6 to #9, Harvard from #5 to #7. Our resources did not fall; did other institutions' rise so sharply? I infer that, in each case, the formulas were simply changed, with notification to no one, not even your readers, who are left to assume that some schools have suddenly soared, others precipitously plummeted.[81]